Results 1 - 15 of 15
1.
European Respiratory Journal Conference: European Respiratory Society International Congress, ERS ; 60(Supplement 66), 2022.
Article in English | EMBASE | ID: covidwho-2249081

ABSTRACT

Introduction: COVID-19 has affected more than 223 countries and territories worldwide. There is a pressing need for non-invasive, low-cost and highly scalable solutions to detect COVID-19, especially in low-resource countries. Our aim was to develop a deep-learning model for identifying COVID-19 using voice data provided by the general population via personal devices. Method(s): We used the Cambridge University dataset consisting of 893 audio samples, crowd-sourced from 366 participants via the COVID-19 Sounds app (covid-19-sounds.org). Voice features were extracted using Mel-spectrogram analysis. Using the voice data, we developed deep-learning classification models to detect positive COVID-19 cases, including a Long Short-Term Memory (LSTM) network and a Convolutional Neural Network (CNN), and compared their predictive power to baseline models (Logistic Regression and Support Vector Machine). Result(s): Fig. 1 shows model parameters and results. The LSTM model achieved the highest accuracy (84%), outperforming state-of-the-art sound-based models (72.1%). Conclusion(s): Deep learning can detect subtle changes in the voice of COVID-19 patients. The sensitivity of our model shows a significant improvement over the antigen test (84% vs. 56.2%), albeit with a lower specificity (83% vs. 99.5%). As a complement to other testing techniques, this model may aid the rapid diagnosis of COVID-19 cases through simple voice analysis.
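The pipeline described above can be outlined in a few lines of Python. The sketch below is not the authors' code: it assumes librosa and TensorFlow/Keras, and its file names, layer sizes, and hyperparameters are illustrative only.

```python
# Minimal sketch of the described pipeline: log-Mel spectrogram -> LSTM classifier.
import numpy as np
import librosa
import tensorflow as tf

def mel_features(path, sr=16000, n_mels=64):
    """Load one recording and return a (time, n_mels) log-Mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel).T  # transpose so time is the sequence axis

# Small LSTM binary classifier (COVID-19 positive vs. negative); sizes are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(None, 64)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(padded_spectrograms, labels, epochs=..., validation_split=0.2)
```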

2.
J Affect Disord ; 325: 627-632, 2023 03 15.
Article in English | MEDLINE | ID: covidwho-2165450

ABSTRACT

BACKGROUND: Variations in speech intonation are known to be associated with changes in mental state over time. Behavioral vocal analysis is an algorithmic method of determining individuals' behavioral and emotional characteristics from their vocal patterns. It can provide biomarkers for use in psychiatric assessment and monitoring, especially when remote assessment is needed, as in the COVID-19 pandemic. The objective of this study was to design and validate an effective prototype of automatic speech analysis based on algorithms for classifying speech features related to major depressive disorder (MDD), using a remote assessment system combining a mobile app for speech recording with central cloud processing of prosodic vocal patterns. METHODS: Machine learning compared the vocal patterns of 40 patients diagnosed with MDD to the patterns of 104 non-clinical participants. The vocal patterns of the 40 patients in the acute phase were also compared to those of 14 of these patients in the remission phase of MDD. RESULTS: A vocal depression predictive model was successfully generated. The vocal depression scores of MDD patients were significantly higher than the scores of the non-patient participants (p < 0.0001). The vocal depression scores of the MDD patients in the acute phase were significantly higher than in remission (p < 0.02). LIMITATIONS: The main limitation of this study is its relatively small sample size, since machine learning validity improves with big data. CONCLUSIONS: The computerized analysis of prosodic changes may be used to generate biomarkers for the early detection of MDD, remote monitoring, and the evaluation of responses to treatment.


Subject(s)
COVID-19 , Depressive Disorder, Major , Humans , Depressive Disorder, Major/diagnosis , Depressive Disorder, Major/epidemiology , Pandemics , Speech , Machine Learning
3.
2022 International Conference on Engineering and MIS, ICEMIS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2136253

ABSTRACT

Current COVID-19 diagnosis requires direct patient interaction, takes a variable amount of time to return results, and is costly. In some low-income nations it is not even accessible to the population at large, leading to a shortage of medical care. An affordable, rapid, and readily available method for diagnosing COVID-19 is therefore essential. Several initiatives have used smartphone-collected sounds and coughs to build machine learning algorithms that can categorize and discriminate COVID-19 sounds from healthy ones. Most prior studies trained their analyzers on sounds such as breathing or coughing and obtained impressive results. For this investigation we used the Coswara dataset, which contains recordings of nine distinct sound types covering cough, breathing, and speech. COVID-19 may be diagnosed more accurately using models trained on a variety of audio rather than a single model trained on cough alone. This work examines the potential of machine learning techniques to enhance the identification of COVID-19 in an early and non-invasive manner through the monitoring of audio sounds. XGBoost outperforms existing benchmark classification algorithms and achieves 92% accuracy when all sounds are used. A Random Forest trained on the vowel /e/ sound, with 98.36% accuracy, was found to be among the most effective; compared to the other vowels, the vowel /e/ captures the impact of COVID-19 on sound quality most precisely. © 2022 IEEE.
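A minimal sketch of the kind of classifier comparison reported here, assuming summary MFCC features per recording; the feature choice, data loading, and model settings are illustrative assumptions, not the study's actual pipeline.

```python
# Compare XGBoost and Random Forest on summary audio features from Coswara-style recordings.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

def summary_features(path, sr=16000, n_mfcc=13):
    """Mean MFCC vector as a fixed-length representation of one recording."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# X: one feature row per recording; y: 1 = COVID-19 positive, 0 = healthy.
# X = np.vstack([summary_features(p) for p in audio_paths]); y = labels
X, y = np.random.rand(200, 13), np.random.randint(0, 2, 200)  # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for clf in (XGBClassifier(eval_metric="logloss"),
            RandomForestClassifier(n_estimators=300, random_state=0)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, clf.predict(X_te)))
```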

4.
Psychosomatic Medicine ; 84(5):A84, 2022.
Article in English | EMBASE | ID: covidwho-2003412

ABSTRACT

Background: The importance of protecting emergency department (ED) clinicians' mental health has been dramatically reinforced by the COVID-19 pandemic, which has led to a high prevalence of Posttraumatic Stress Disorder (PTSD) and other stress-associated adverse mental health effects in ED clinicians. This study proposes an innovative approach using digital phenotyping to develop digital biomarkers as predictors of stress pathologies. Furthermore, we determine how candidate digital biomarkers relate to physiological markers of chronic stress. Methods: We used computer vision and voice analysis to extract facial, voice, speech, and movement characteristics from an unstructured clinical interview. We previously tested the approach for identifying digital biomarkers in a cohort of trauma survivors to discriminate PTSD. Here, we adapted the approach to test its potential for developing digital biomarkers as predictors of stress pathologies in ED clinicians. Results: Video- and audio-based markers were able to accurately discriminate PTSD (AUC=0.90) and depression status (AUC=0.86) in trauma survivors. Building on these results, we will present pilot findings from an ongoing longitudinal study of COVID-19 frontline workers. Conclusion: Digital biomarkers identified through direct clinical observation during free speech may be used to classify stress pathologies in ED clinicians. Digital biomarkers could improve the scalability and sensitivity of clinical assessments through low-burden, passive evaluations of well-being, which is critical for this high-risk population.

5.
7th IEEE International conference for Convergence in Technology, I2CT 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1992607

ABSTRACT

This paper develops an improved and safer technology for detecting COVID-19 and thus contributes to the literature and to the control of COVID-19. Coronavirus is a new infection that causes the coronavirus disease called COVID-19. The disease is thought to have originated in bats and was first reported in Wuhan, China, in December 2019; since then, it has spread rapidly throughout the globe. One of the main indicators of COVID-19 is fever, which can be readily detected. Since the outbreak began, temperature screening using infrared thermometers, together with RT-PCR, has been used in advanced and developed countries to measure body temperature and identify infected people. This is not a very effective means of detection, as it demands considerable manpower and infrastructure to check people one by one. Moreover, close contact between the infected person and the person performing the check can spread the coronavirus at a faster pace. This paper proposes a framework that can detect the coronavirus instantly and non-invasively from a human cough voice. The proposed framework is much safer than conventional technologies, as it greatly reduces human interaction. It uses spectrographic images of the voice for COVID-19 detection. The framework has been deployed in a web application so that it can be used from any part of the world without users exposing themselves to infected people. Unlike conventional procedures, this non-invasive method avoids discomfort to sensitive areas. © 2022 IEEE.

6.
J Voice ; 2022 Jul 04.
Article in English | MEDLINE | ID: covidwho-1972237

ABSTRACT

INTRODUCTION: COVID-19 is an infectious disease whose symptomatic course differs from person to person. Sequelae in the nervous, cardiovascular, and/or digestive systems require a multidisciplinary approach involving different health professionals, including the speech therapist. In this way we can speak of a direct relationship between speech therapy and COVID-19, especially in patients with serious sequelae such as the inability to eat and/or speak and the loss of voice. Damage to the laryngeal mucosa can cause the loss of some qualities of the voice, limiting oral communication. Dysphonia may therefore result from severe weakness, continuous vocal overexertion, or vocal fold paralysis. OBJECTIVES/HYPOTHESIS: The objective of this study was to identify behavioral patterns in the biomechanical correlates of people who had symptomatic COVID-19 with voice sequelae. METHODS: An experimental study with a total of 21 participants (11 women and 10 men) with post-COVID-19 voice sequelae is presented. Voice samples were collected and biomechanical correlates were analyzed with the Voice Clinical Systems program. RESULTS AND CONCLUSIONS: The results show different altered biomechanical patterns between men and women that correlate with other infectious diseases.

7.
Laryngo- Rhino- Otologie ; 101:S320, 2022.
Article in English | EMBASE | ID: covidwho-1967682

ABSTRACT

Introduction We report on three patients who presented at our clinic between February and June 2021 with impaired voice resulting in aphonia after COVID-19 infection. Material & methods Indirect laryngoscopy and videostroboscopy were performed in all patients. Voice quality was limited in all patients. Voice analysis was performed perceptually (RBH scheme) and objectively by computer-assisted analysis (Göttingen hoarseness diagram, voice range profile). Self-assessment was performed using the Voice Handicap Index (VHI). Results Laryngoscopically, all patients showed bilaterally mobile vocal folds, non-irritated mucosa and a wide glottis. On videostroboscopy, all patients showed wide, irregular vibration amplitudes and incomplete glottic closure. Objective voice analysis revealed pathological values for the irregularity and noise components as well as the Dysphonia Severity Index (DSI). In the VHI, all patients documented a high-grade voice disorder with a mean score > 62. Our patients continued to suffer from dysphonia 6-9 months after initial presentation. Voice therapy did not provide satisfactory voice improvement. Discussion Whether the glottic hypofunction is due to sensorimotor dysfunction caused by the neurotropic coronavirus remains conjecture. In addition, the hypofunction may be related to the generally reduced performance of patients with post-COVID syndrome. Conclusion To our knowledge from the literature, this is the first description of dysphonia as a possible symptom of post-COVID syndrome.

8.
Int J Environ Res Public Health ; 19(11)2022 05 26.
Article in English | MEDLINE | ID: covidwho-1892854

ABSTRACT

This study investigates the effects of face masks on physiological and voice parameters, focusing on cyclists who perform incremental sports activity. Three healthy male subjects were monitored in a climatic chamber wearing three types of masks with different acoustic properties, breathing resistance, and air filtration performance. Masks A and B were surgical masks made of hydrophobic fabric and of three layers of non-woven 100% polypropylene fabric, respectively. Mask S was a multilayer cloth mask designed for sports activity. Mask B and Mask S behave similarly and show lower sound attenuation, lower sound transmission loss, and lower breathing resistance than Mask A, although Mask A exhibits slightly higher filtration efficiency. Similar cheek temperatures were observed for Masks A and B, while a significantly higher temperature was measured with Mask S during incremental physical activity. No differences were found between the masks and the no-mask condition for voice monitoring. Overall, Mask B and Mask S are suitable for sports activities without adverse effects on voice production while ensuring good breathing resistance and filtration efficiency. These outcomes support choosing masks for sports activities that show the best trade-off between breathing resistance, filtration efficiency, sound attenuation, and sound transmission loss.


Subject(s)
Masks , Textiles , Bicycling , Filtration , Humans , Male , Respiration
9.
Cancers (Basel) ; 14(10)2022 May 11.
Article in English | MEDLINE | ID: covidwho-1875500

ABSTRACT

Laryngeal carcinoma is the most common malignant tumor of the upper respiratory tract. Total laryngectomy provides complete and permanent detachment of the upper and lower airways, which causes the loss of voice and leads to a patient's inability to communicate verbally in the postoperative period. This paper aims to exploit modern deep learning research to objectively classify, extract and measure substitution voicing after laryngeal oncosurgery from the audio signal. We propose applying well-known convolutional neural networks (CNNs), developed for image classification, to the analysis of the voice audio signal. Our approach takes a Mel-frequency spectrogram (MFCC) as the input to a deep neural network architecture. A database of digital speech recordings of 367 male subjects (279 normal speech samples and 88 pathological speech samples) was used. Our approach showed the best true-positive rate of any of the compared state-of-the-art approaches, achieving an overall accuracy of 89.47%.
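A rough sketch of a small 2D CNN operating on fixed-size Mel-spectrogram inputs, in the spirit of the approach described above; the input shape, layer widths, and training details are assumptions, not the paper's architecture.

```python
# Small 2D CNN classifying Mel-spectrogram "images" as normal vs. substitution voicing.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 128, 1)),          # (mel bins, time frames, channels)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # P(pathological speech)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(name="true_positive_rate")])
# model.fit(spectrogram_batches, labels, epochs=..., validation_split=0.2)
```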

10.
J Voice ; 2021 Nov 29.
Article in English | MEDLINE | ID: covidwho-1536943

ABSTRACT

OBJECTIVES: The World Health Organization declared coronavirus disease (COVID-19) a global pandemic on March 11, 2020. The aim of this study was to determine the effectiveness and reliability of voice analysis performed with surgical masks and respirators during the pandemic and to discuss its routine applicability. METHODS: This prospective study included 204 patients aged 18 to 55 who presented to our clinic and whose preoperative SARS-CoV-2 PCR tests were negative. Voice analyses were performed on each patient without a mask, with a surgical mask, and with a valved filtering facepiece 3 (FFP3) respirator, respectively. The F0, shimmer, jitter, s/z ratio, maximum phonation time and harmonics-to-noise ratio (HNR) values obtained from the voice analyses were compared with each other. RESULTS: No significant differences were found in F0, jitter, shimmer, HNR, s/z ratio or maximum phonation time between the analyses performed without a mask and with a surgical mask. With an FFP3 respirator, a significant difference was found only in the shimmer and HNR values compared to the other analyses. When the data were examined separately by sex, comparison of the voice analyses obtained in the three conditions yielded different results for the female and male groups. CONCLUSION: The physician should decide whether to perform voice analysis with a surgical mask or with an FFP3 respirator, depending on the clinically relevant parameters.
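The parameters compared in this study (F0, jitter, shimmer, HNR) can be extracted programmatically. The sketch below assumes the praat-parselmouth package; the file name and pitch floor/ceiling values are illustrative assumptions.

```python
# Extract F0, jitter, shimmer and HNR from one recording via Praat (parselmouth).
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("sample.wav")

pitch = snd.to_pitch()
f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")

point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, point_process], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

harmonicity = snd.to_harmonicity_cc()
hnr = call(harmonicity, "Get mean", 0, 0)

print(f"F0={f0_mean:.1f} Hz, jitter={jitter_local:.4f}, "
      f"shimmer={shimmer_local:.4f}, HNR={hnr:.1f} dB")
```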

11.
Arab J Sci Eng ; : 1-11, 2021 Oct 08.
Article in English | MEDLINE | ID: covidwho-1460508

ABSTRACT

Healthcare sensors represent a valid and non-invasive instrument for capturing and analysing physiological data. Several vital signals, such as voice signals, can be acquired anytime and anywhere with the least possible discomfort to the patient, thanks to the development of increasingly advanced devices. The integration of sensors with artificial intelligence techniques contributes to faster and easier solutions aimed at improving early diagnosis, personalized treatment, remote patient monitoring and better decision making, all tasks vital in a critical situation such as the COVID-19 pandemic. This paper presents a study of the possibility of supporting early and non-invasive detection of COVID-19 through the analysis of voice signals by means of the main machine learning algorithms. If demonstrated, this detection capacity could be embedded in a powerful mobile screening application. For this study, the Coswara dataset is considered. The aim of the investigation is not only to evaluate which machine learning technique best distinguishes a healthy voice from a pathological one, but also to identify which vowel sound is most seriously affected by COVID-19 and is therefore most reliable in detecting the pathology. The results show that Random Forest is the technique that classifies healthy and pathological voices most accurately. Moreover, evaluation of the vowel /e/ allows the effects of COVID-19 on voice quality to be detected with better accuracy than the other vowels.

12.
IEEE Access ; 9: 65750-65757, 2021.
Article in English | MEDLINE | ID: covidwho-1225645

ABSTRACT

The Covid-19 pandemic represents one of the greatest global health emergencies of the last few decades, with indelible consequences for societies throughout the world. The cost in terms of human lives lost is devastating on account of the high contagiousness and mortality rate of the virus. Millions of people have been infected, frequently requiring continuous assistance and monitoring. Smart healthcare technologies and Artificial Intelligence algorithms constitute promising solutions, useful not only for monitoring patient care but also to support the early diagnosis, prevention and evaluation of Covid-19 in a faster and more accurate way. On the other hand, realising reliable and precise smart healthcare solutions, able to acquire and process voice signals in real time by means of appropriate Internet of Things devices, requires the identification of algorithms able to discriminate accurately between pathological and healthy subjects. In this paper, we explore and compare the performance of the main machine learning techniques in terms of their ability to correctly detect Covid-19 disorders through voice analysis. Several studies report significant effects of this virus on voice production due to the considerable impairment of the respiratory apparatus. Vocal fold oscillations that are more asynchronous, asymmetrical and restricted are observed during phonation in Covid-19 patients. Voice sounds selected from the Coswara database, an available crowd-sourced database, have been analysed and processed to evaluate the capacity of the main ML techniques to distinguish between healthy and pathological voices. All the analyses have been evaluated in terms of accuracy, sensitivity, specificity, F1-score and Receiver Operating Characteristic area, and show the reliability of the Support Vector Machine algorithm in detecting Covid-19 infection, achieving an accuracy of about 97%.
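A sketch of how a Support Vector Machine voice classifier could be evaluated with the metrics listed above (accuracy, sensitivity, specificity, F1-score, ROC area); the feature matrix and labels are placeholders, not the paper's data.

```python
# Evaluate an SVM voice classifier with the metrics reported in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, f1_score, recall_score,
                             roc_auc_score, confusion_matrix)

X, y = np.random.rand(300, 20), np.random.randint(0, 2, 300)  # placeholder features/labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
pred = svm.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()

print("accuracy   ", accuracy_score(y_te, pred))
print("sensitivity", recall_score(y_te, pred))          # tp / (tp + fn)
print("specificity", tn / (tn + fp))
print("F1-score   ", f1_score(y_te, pred))
print("ROC AUC    ", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))
```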

13.
J Med Internet Res ; 23(4): e24191, 2021 04 19.
Article in English | MEDLINE | ID: covidwho-1143363

ABSTRACT

BACKGROUND: During the COVID-19 pandemic, health professionals have been directly confronted with the suffering of patients and their families. By making them main actors in the management of this health crisis, they have been exposed to various psychosocial risks (stress, trauma, fatigue, etc). Paradoxically, stress-related symptoms are often underreported in this vulnerable population but are potentially detectable through passive monitoring of changes in speech behavior. OBJECTIVE: This study aims to investigate the use of rapid and remote measures of stress levels in health professionals working during the COVID-19 outbreak. This was done through the analysis of participants' speech behavior during a short phone call conversation and, in particular, via positive, negative, and neutral storytelling tasks. METHODS: Speech samples from 89 health care professionals were collected over the phone during positive, negative, and neutral storytelling tasks; various voice features were extracted and compared with classical stress measures via standard questionnaires. Additionally, a regression analysis was performed. RESULTS: Certain speech characteristics correlated with stress levels in both genders; mainly, spectral (ie, formant) features, such as the mel-frequency cepstral coefficient, and prosodic characteristics, such as the fundamental frequency, appeared to be sensitive to stress. Overall, for both male and female participants, using vocal features from the positive tasks for regression yielded the most accurate prediction results of stress scores (mean absolute error 5.31). CONCLUSIONS: Automatic speech analysis could help with early detection of subtle signs of stress in vulnerable populations over the phone. By combining the use of this technology with timely intervention strategies, it could contribute to the prevention of burnout and the development of comorbidities, such as depression or anxiety.
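A minimal sketch of the regression set-up described above, predicting a questionnaire stress score from per-speaker voice features and reporting mean absolute error; the model choice (ridge regression) and the data are assumptions for illustration.

```python
# Predict a stress questionnaire score from per-speaker voice features; report MAE.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

# X: per-speaker acoustic features (e.g. MFCC and F0 statistics); y: stress score.
X = np.random.rand(89, 30)                 # placeholder for 89 participants
y = np.random.uniform(10, 40, size=89)     # placeholder questionnaire scores

pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
print("MAE:", mean_absolute_error(y, pred))
```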


Subject(s)
Anxiety/diagnosis , Burnout, Professional/diagnosis , COVID-19/psychology , Health Personnel/psychology , Speech Acoustics , Speech/physiology , Adult , Anxiety/etiology , Anxiety/psychology , Burnout, Professional/etiology , Burnout, Professional/psychology , COVID-19/epidemiology , Female , Humans , Male , Pandemics , Pilot Projects , SARS-CoV-2 , Surveys and Questionnaires , Telephone
14.
J Voice ; 2021 Mar 09.
Article in English | MEDLINE | ID: covidwho-1126961

ABSTRACT

OBJECTIVE: The purpose of our study was to investigate the impact of surgical masks on vocal parameters such as F0, vocal intensity, jitter, shimmer and harmonics-to-noise ratio, in order to understand how a surgical mask can affect voice and verbal communication in adults. METHODS: The study was carried out on a selected group of 60 healthy subjects. All subjects were trained to produce a vocal sample of a sustained /a/ at conversational voice intensity for the Maximum Phonation Time (MPT), first wearing a surgical mask and then without one. Voice samples were recorded directly in Praat. RESULTS: There were no statistically significant differences in any acoustic parameter between the masked and unmasked conditions. A non-significant decrease in vocal intensity was observed in 65% of the subjects while wearing a surgical mask. CONCLUSIONS: The statistical comparison of all acoustic voice parameters extracted with and without a surgical mask did not reveal any significant difference. Most subjects showed a decrease in measured vocal intensity while wearing the surgical mask. Our conclusion is that wearing a mask is likely to induce an unconscious need to increase vocal effort, resulting over time in a greater risk of developing functional dysphonia. The reduction in intensity can also affect social interaction and speech audibility, especially for individuals with hearing loss.
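The paired with-mask/without-mask comparison reported here could be run, for one acoustic parameter, roughly as follows; the arrays and the choice of a Wilcoxon signed-rank test are illustrative assumptions, not the authors' analysis.

```python
# Paired comparison of vocal intensity (dB) with and without a surgical mask.
import numpy as np
from scipy.stats import wilcoxon

intensity_no_mask = np.random.normal(70.0, 3.0, size=60)              # placeholder dB values
intensity_mask = intensity_no_mask - np.random.normal(0.5, 1.0, 60)   # placeholder slight drop

stat, p = wilcoxon(intensity_no_mask, intensity_mask)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")
# p >= 0.05 would be consistent with the non-significant difference reported above.
```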

15.
Acta Otorhinolaryngol Ital ; 41(1): 1-5, 2021 Feb.
Article in English | MEDLINE | ID: covidwho-940622

ABSTRACT

OBJECTIVE: Among the different procedures used by ENT specialists, acoustic analysis of voice has become widely used for the correct diagnosis of dysphonia. Instrumental measurement of acoustic parameters was limited during the COVID-19 pandemic by the common belief that a face mask affects the results of the analysis. The purpose of our study was to investigate the impact of surgical masks on F0, jitter, shimmer and harmonics-to-noise ratio (HNR) in adults. METHODS: The study was carried out on a selected group of 50 healthy subjects. Voice samples were recorded directly in Praat. All subjects were trained to produce a vocal sample of a sustained /a/ at conversational voice intensity, with no intensity or frequency variation, for the Maximum Phonation Time (MPT), first wearing a surgical mask and then without one. RESULTS: None of the variations in acoustic voice analysis detected between wearing and not wearing a surgical mask was statistically significant. CONCLUSIONS: Our study demonstrates that acoustic voice analysis can continue to be performed with the patient wearing a surgical mask, even during the COVID-19 pandemic.


Subject(s)
COVID-19/complications , Dysphonia/etiology , Masks/adverse effects , Speech Acoustics , Voice Quality , Adult , Aged , COVID-19/diagnosis , Dysphonia/diagnosis , Female , Humans , Male , Middle Aged , Phonation , Sound Spectrography